GTAdam: Gradient Tracking With Adaptive Momentum for Distributed Online Optimization
Authors
Abstract
This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, i.e., by means of local computation and communication, without any central coordinator. We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first- and second-order momentum estimates of the gradient. The algorithm is analyzed in the online setting for strongly convex cost functions with Lipschitz continuous gradients. We provide an upper bound for the dynamic regret given by a term related to the initial conditions and another term related to the temporal variations of the objective functions. Moreover, a linear convergence rate is guaranteed in the static setup. The algorithm is tested on a time-varying classification problem, on a (moving) target localization problem, and on a stochastic optimization setup from image classification. In these numerical experiments from multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
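As a rough illustration of the update the abstract describes, the sketch below combines a consensus/gradient-tracking step with Adam-style first- and second-order momentum estimates. The function gtadam_step, the gradient oracle grad_fn, the parameter names, and the exact ordering of the momentum, mixing, and tracking updates are assumptions made for illustration, not the paper's precise recursion.

```python
import numpy as np

def gtadam_step(x, s, m, v, grad_old, grad_fn, W,
                alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One synchronous GTAdam-style iteration for N agents (one per row).

    x, s, m, v : (N, d) arrays -- decisions, gradient trackers, and
                 first-/second-order momentum estimates.
    grad_old   : (N, d) local gradients evaluated at the current x.
    grad_fn    : callable mapping an (N, d) iterate to the (N, d) stacked
                 local gradients (hypothetical oracle, for illustration).
    W          : (N, N) doubly stochastic weight matrix of the network.
    """
    # Adam-like momentum estimates built from the tracked gradient s.
    m = beta1 * m + (1.0 - beta1) * s
    v = beta2 * v + (1.0 - beta2) * s**2
    # Consensus (mixing) step combined with the adaptive-momentum descent step.
    x_new = W @ x - alpha * m / (np.sqrt(v) + eps)
    # Gradient tracking: mix the trackers and add the change in local gradients.
    grad_new = grad_fn(x_new)
    s_new = W @ s + grad_new - grad_old
    return x_new, s_new, m, v, grad_new
```

Here each agent's row of x is its local decision, s tracks the network-average gradient, and the Adam-style scaling replaces the plain gradient step of standard gradient tracking.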
Similar references
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
A. Proof of Theorem 1. We first recall the following lemma. Lemma 1 (Lemma 1, Gong et al., 2013). Under Assumption 1(3), for any η > 0 and any x, y ∈ ℝ such that x = prox_{ηg}(y − η∇f(y)), one has F(x) ≤ F(y) − (1/(2η) − L/2)‖x − y‖². Applying Lemma 1 with x = x_k, y = y_k, we obtain F(x_k) ≤ F(y_k) − (1/(2η) − L/2)‖x_k − y_k‖². (12) Since η < 1/L, it follows that F(x_k) ≤ F(y_k). Moreover, ...
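Spelling out the decrease argument quoted in this snippet (a restatement only, with the positivity of the step-size coefficient made explicit):

```latex
F(x_k) \;\le\; F(y_k) - \Big(\tfrac{1}{2\eta} - \tfrac{L}{2}\Big)\lVert x_k - y_k\rVert^2
\;\le\; F(y_k),
\qquad \text{since } \eta < \tfrac{1}{L} \;\Longrightarrow\; \tfrac{1}{2\eta} - \tfrac{L}{2} > 0 .
```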
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
In many modern machine learning applications, structures of underlying mathematical models often yield nonconvex optimization problems. Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantee. In this work, we investigate the accelerated proximal gradient method for nonconvex program...
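As context for this related entry, a minimal proximal-gradient-with-momentum loop might look as follows; the soft-thresholding prox, the heavy-ball extrapolation, and all parameter names are illustrative assumptions and not the specific accelerated scheme analyzed in that work.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (used here as an example of g)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_gradient_momentum(grad_f, x0, eta, beta=0.9, iters=200, lam=0.1):
    """Minimal proximal-gradient-with-momentum sketch for F = f + lam*||.||_1.

    grad_f : gradient of the smooth (possibly nonconvex) part f.
    eta    : step size, typically eta < 1/L with L the Lipschitz constant of
             grad_f (matching the condition quoted in the proof snippet above).
    beta   : heavy-ball / extrapolation weight -- an illustrative choice.
    """
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # momentum / extrapolation point
        x_prev, x = x, soft_threshold(y - eta * grad_f(y), eta * lam)  # prox step
    return x
```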
ADAPTIVE FUZZY TRACKING CONTROL FOR A CLASS OF NONLINEAR SYSTEMS WITH UNKNOWN DISTRIBUTED TIME-VARYING DELAYS AND UNKNOWN CONTROL DIRECTIONS
In this paper, an adaptive fuzzy control scheme is proposed for a class of perturbed strict-feedback nonlinear systems with unknown discrete and distributed time-varying delays, and the proposed design method does not require a priori knowledge of the signs of the control gains. Based on the backstepping technique, the adaptive fuzzy controller is constructed. The main contributions of the paper...
Adaptive Online Gradient Descent
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions ...
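A minimal sketch of the idea behind such adaptive step sizes, assuming a simple schedule driven by accumulated curvature observations (the paper's actual schedule and guarantees are not reproduced here):

```python
import numpy as np

def adaptive_ogd(grad_fn, curvature_fn, x0, T):
    """Minimal sketch of an adaptive online gradient descent loop.

    grad_fn(t, x)      -> gradient of the round-t loss at x.
    curvature_fn(t, x) -> an observed strong-convexity estimate for round t
                          (0 for purely linear/convex losses).
    The step size 1 / (accumulated curvature + sqrt(t)) is an illustrative
    choice: it needs no prior curvature bound and shrinks faster when the
    observed losses are strongly convex.
    """
    x = np.asarray(x0, dtype=float)
    h_sum = 0.0
    iterates = []
    for t in range(1, T + 1):
        g = grad_fn(t, x)
        h_sum += curvature_fn(t, x)        # accumulate observed curvature
        eta = 1.0 / (h_sum + np.sqrt(t))   # adaptive step size
        x = x - eta * g                    # gradient step (projection omitted)
        iterates.append(x.copy())
    return iterates
```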
Distributed Stochastic Optimization via Adaptive Stochastic Gradient Descent
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial in many applications, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial algorithm that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic op...
Journal
Journal title: IEEE Transactions on Control of Network Systems
Year: 2023
ISSN: 2325-5870, 2372-2533
DOI: https://doi.org/10.1109/tcns.2022.3232519